TECHnalysis Research Blog

December 7, 2023
AMD Makes Definitive GenAI Statement

By Bob O'Donnell

Lately, whenever a major tech industry company hosts a large event, they almost inevitably end up discussing their strategy and latest products focused on generative AI. Such was the case with major semiconductor provider AMD, as they held their Advancing AI event in San Jose. The company officially unveiled their previously announced Instinct MI300 line of GPU-based AI accelerators for the data center, discussed the growing software ecosystem for the products, laid out their roadmap for AI-accelerated PC silicon, and sprinkled in several intriguing technological advancements along the way.

In truth, there was a relative scarcity of “truly new” news, and yet you couldn’t help but walk away from the event feeling impressed. AMD told a solid, comprehensive product and technology story, highlighted a large (perhaps even too large?) number of clients/partners, and demonstrated the scrappy, competitive ethos of the company under CEO Lisa Su. On a practical level, I also walked away even more certain that the company is going to be a serious competitor to Nvidia on the AI training and inference front, an ongoing leader in supercomputing and other high-performance computing (HPC) applications, and an increasingly capable competitor in the upcoming AI PC market. Not bad for a 2-hour keynote.

Not surprisingly, most of the event’s focus was on the new Instinct MI300X, which is clearly positioned as a competitor to Nvidia’s market-dominating GPU-based AI accelerators, such as the H100. While much of the tech world has become infatuated with the GenAI performance that the combination of Nvidia’s hardware and CUDA software has enabled, there’s also a rapidly growing recognition that Nvidia’s utter dominance of the market isn’t healthy for the long term. As a result, there’s been a lot of pressure on AMD to come up with a reasonable alternative, particularly because AMD is generally seen as the only serious competitor to Nvidia on the GPU front.

Thankfully, the MI300X undoubtedly triggered enormous sighs of relief heard ’round the world, as initial benchmarks suggest that AMD achieved exactly what many were hoping for. Specifically, AMD touted that they could match the performance of Nvidia’s H100 on AI model training and offer up to a 60% improvement on AI inference workloads. In addition, AMD touted that combining eight MI300X cards into a system would enable the fastest generative AI computer in the world and offer access to significantly more high-speed memory than the current Nvidia alternative. To be fair, Nvidia has already announced the GH200 “Grace Hopper” Superchip, which will inevitably offer even better performance, but as is almost inevitably the case in the semiconductor world, this is bound to be a game of performance leapfrog for many years to come. Regardless of how people choose to accept or challenge the benchmarks, the key point here is that AMD is now ready to play the game.

Given that level of performance, it wasn’t terribly surprising to see AMD parade a long list of partners across the stage. From major cloud providers like Microsoft Azure, Oracle Cloud and Meta to enterprise server partners like Dell Technologies, Lenovo and SuperMicro, there was nothing but praise and excitement from these partners. Of course, it’s easy to understand given that these are companies who are eager for an alternative and additional supplier to help them meet the staggering demand they now have for GenAI-optimized systems.

In addition to the MI300X, AMD also discussed their Instinct MI300A, which is the company’s first APU designed for the data center. The MI300A leverages the same type of GPU XCD (Accelerator Complex Die) elements as the MI300X, but includes six instead of eight and uses the freed-up die space to incorporate eight Zen4 CPU cores. In addition, through the use of AMD’s Infinity Fabric chip-to-chip interconnect technology, it gives the entire system simultaneous access to a single large pool of high-bandwidth memory, or HBM. (One of the interesting technological sidenotes from the event was that AMD announced plans to open up the previously proprietary Infinity Fabric technology to a limited set of partners. While no details are known just yet, it could conceivably lead to some interesting new multi-vendor chiplet designs in the future.)
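The chiplet trade-off described above can be made concrete with a tiny sketch. The die counts come from AMD's disclosures as reported here; the class and field names are purely illustrative, not anything from AMD's documentation:

```python
from dataclasses import dataclass

@dataclass
class Accelerator:
    """Illustrative model of the chiplet mix in AMD's MI300 parts."""
    name: str
    xcd_count: int        # GPU Accelerator Complex Dies (XCDs)
    zen4_cpu_cores: int   # CPU cores sharing the same HBM pool

# Figures from the article: the MI300X is all-GPU (eight XCDs), while the
# MI300A APU trades two XCDs for eight Zen4 CPU cores on the same package.
mi300x = Accelerator("Instinct MI300X", xcd_count=8, zen4_cpu_cores=0)
mi300a = Accelerator("Instinct MI300A", xcd_count=6, zen4_cpu_cores=8)

# Two XCDs' worth of die area is what the APU gives up for its CPU cores.
print(mi300x.xcd_count - mi300a.xcd_count)  # 2
```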

This simultaneous CPU and GPU memory access is essential for HPC-type applications, and that capability is apparently one of the reasons that Lawrence Livermore National Laboratory chose the MI300A to be at the core of its new El Capitan supercomputer, which is being built in conjunction with HPE. El Capitan is expected to be both the fastest and one of the most power-efficient supercomputers in the world.

On the software side of things, AMD also made numerous announcements around its ROCm software platform for GenAI, which has now been upgraded to version 6. As with the new hardware, they discussed several key partnerships that build on previous news (such as with open-source model provider Hugging Face and the PyTorch AI development platform) as well as debuting some key new ones. Most notable was that OpenAI said it was going to bring native support for AMD’s latest hardware to version 3.0 of its own Triton development platform. This will make it trivial for the many programmers and organizations eager to jump on the OpenAI bandwagon to leverage AMD’s latest hardware (and give them an alternative to the Nvidia-only choices they’ve had up until now).

The final portion of AMD’s announcements covered their advancements in AI PCs. Though the company doesn’t get much credit or recognition for it, they were actually the first to incorporate a dedicated NPU into a PC chip with the launch of the Ryzen 7040 earlier this year. The XDNA AI acceleration block it includes leverages technology that AMD acquired through its Xilinx purchase. At this year’s event, the company announced the new Ryzen 8040, which includes an upgraded NPU with 60% better AI performance. Interestingly, they also previewed their subsequent generation, codenamed “Strix Point,” which isn’t expected until the end of 2024. The XDNA2 architecture it will include is expected to offer an impressive 3x improvement versus the 7040. Given that the company still needs to sell 8040-based systems in the meantime, you could argue that the “teaser” of the new chip was a bit unusual. However, what I think AMD wanted to do—and what I believe they achieved—in making the preview was to hammer home the point that this is an incredibly fast-moving market and they’re ready to compete. (Of course, it was also a shot across the competitive bow to both Intel and Qualcomm, both of whom will introduce NPU-accelerated PC chips over the next few months.)

Once again, in addition to the hardware, AMD discussed some intriguing AI software advancements for the PC, including the official release of the Ryzen AI 1.0 software for easing the use and accelerating the performance of GenAI-based models and applications on PCs. AMD also brought Microsoft’s new Windows leader Pavan Davuluri onstage to talk about their work to provide native support for AMD’s XDNA accelerators in future versions of Windows, as well as to discuss the growing topic of hybrid AI, where companies expect to be able to split certain types of AI workloads between the cloud and client PCs. There’s much more to be done here—and across the world of AI PCs—but it’s definitely going to be an interesting area to watch in 2024.
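The hybrid AI idea mentioned above—splitting work between cloud and client—can be sketched as a simple routing policy. Everything in this sketch (the function name, the thresholds, the three tiers) is hypothetical, intended only to make the concept concrete; no vendor has published a policy like this:

```python
def route_workload(model_params_b: float, npu_tops: float,
                   latency_sensitive: bool) -> str:
    """Hypothetical policy for deciding where a GenAI workload runs.

    model_params_b:    model size in billions of parameters
    npu_tops:          local NPU throughput in TOPS
    latency_sensitive: whether a user is waiting interactively
    """
    # Small models that fit client silicon stay local, especially when
    # responsiveness matters; very large models fall back to the cloud.
    if model_params_b <= 7 and npu_tops >= 10:
        return "on-device"
    if latency_sensitive and model_params_b <= 13:
        return "split"  # e.g., prompt processing local, generation in cloud
    return "cloud"

print(route_workload(3, 16, True))    # on-device
print(route_workload(13, 16, True))   # split
print(route_workload(70, 16, False))  # cloud
```

The interesting engineering questions are all in the middle "split" tier—how to partition a single inference pass across the network boundary—which is exactly the kind of work the AMD/Microsoft collaboration described above would need to address.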

All told, the AMD AI story was an impressive one, and it was undoubtedly told with a great deal of enthusiasm. From an industry perspective, it’s great to see additional competition, as it will inevitably lead to even faster developments in this exciting new space (if that’s even possible!). In order to really make a difference, however, AMD needs to continue executing well on its vision. I’m certainly confident it’s possible, but there’s a lot of work still ahead of them.

Here's a link to the original column: https://seekingalpha.com/article/4656567-amd-definitive-gen-ai-statement

Bob O’Donnell is the president and chief analyst of TECHnalysis Research, LLC a market research firm that provides strategic consulting and market research services to the technology industry and professional financial community. You can follow him on LinkedIn at Bob O’Donnell or on Twitter @bobodtech.